
    HapPart: partitioning algorithm for multiple haplotyping from haplotype conflict graph

    Each chromosome in the human genome has two copies. The haplotype assembly challenge entails reconstructing the two haplotypes (chromosomes) from aligned fragments of genomic sequence. Polyploid plants such as wheat, paddy, and banana carry more than two copies of each chromosome, so reconstructing multiple haplotypes has become a major research topic; several approaches have been designed for polyploid organisms, and the problem remains a computational challenge. This article introduces a partitioning algorithm, HapPart, for dividing fragments into k groups with a focus on reducing computational time. HapPart uses the minimum error correction (MEC) curve to determine the value of k at which the gain between two consecutive values of k, multiplied by its diversity, is maximal. A haplotype conflict graph is used to construct every possible number of groups, where the dissimilarity between two haplotypes gives the distance between two nodes in the graph. By repeatedly merging the two nodes with the minimum distance between them, the algorithm keeps the error among fragments in the same group minimal. Experimental results on real and simulated data show that HapPart partitions fragments efficiently and with less computational time.
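
    Below is a minimal sketch of the merge step described above, assuming fragments are encoded as SNP-to-allele maps and using average linkage between groups; the distance definition, merge rule, and all names are illustrative, not the authors' exact method.

    ```python
    # Hypothetical sketch of conflict-graph partitioning in the spirit of
    # HapPart: merge the closest pair of fragment groups until k remain.
    import numpy as np

    def fragment_distance(f1, f2):
        """Fraction of conflicting alleles at SNP sites covered by both
        fragments; fragments are dicts mapping SNP index -> allele (0/1)."""
        shared = set(f1) & set(f2)
        if not shared:
            return 0.5  # no shared sites: no evidence either way
        conflicts = sum(f1[s] != f2[s] for s in shared)
        return conflicts / len(shared)

    def partition(fragments, k):
        """Agglomeratively merge the two closest groups until k remain."""
        groups = [[i] for i in range(len(fragments))]
        d = np.array([[fragment_distance(a, b) for b in fragments]
                      for a in fragments])
        while len(groups) > k:
            best, pair = float("inf"), None
            for i in range(len(groups)):
                for j in range(i + 1, len(groups)):
                    # average-linkage distance between groups i and j
                    dist = d[np.ix_(groups[i], groups[j])].mean()
                    if dist < best:
                        best, pair = dist, (i, j)
            i, j = pair
            groups[i] += groups.pop(j)  # merge the closest pair
        return groups

    # toy diploid example: fragments from two haplotypes over 4 SNPs
    frags = [{0: 0, 1: 0}, {1: 0, 2: 0}, {0: 1, 1: 1}, {2: 1, 3: 1}]
    print(partition(frags, k=2))  # -> [[0, 1], [2, 3]]
    ```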

    A comparative study on 10 and 30-year simulation of CMIP5 decadal hindcast precipitation at catchment level

    Early prediction of precipitation has many benefits, as it allows more time for proper planning and decision making, especially for water managers, agricultural stakeholders, and policy and decision-makers. However, owing to ongoing climate change and the chaotic nature of precipitation, predicting too far ahead may lead to inefficient planning because of higher uncertainty and poorer skill in the predicted data; climate models are imperfect replicas that need continuous improvement to predict future change. To investigate the difference between short (a decade) and near-term (30 years) simulations, this study compared the performance of 10- and 30-year simulations of CMIP5 decadal hindcast data at 0.05-degree spatial resolution at the catchment level. Monthly hindcast precipitation of five general circulation models (GCMs), MIROC4h, MRI-CGCM3, MPI-ESM-LR, MIROC5, and CMCC-CM, was downloaded from the CMIP5 data portal. First, the model data were subset for the Australian region and the units of the GCM data were converted to millimetres. Next, the GCM data were spatially interpolated onto a 0.05-degree grid using the second-order conservative method in the Climate Data Operators (CDO) tool. Monthly observed gridded data at 0.05-degree resolution were collected from the Australian Bureau of Meteorology (BoM). Finally, both the observed and GCM data were subset for the Brisbane River catchment in Queensland, Australia. Model performance was assessed against the corresponding observed values through four skill tests: mean bias, mean absolute error, anomaly correlation coefficient, and index of agreement. The results show that the 30-year simulations have comparatively higher mean bias and lower skill than the 10-year simulations, which appears related to ensemble size and the external forcing from increasing greenhouse gases over the longer simulation period.
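
    As a concrete illustration, here is a minimal sketch of the four skill tests named above, computed with NumPy on synthetic monthly series; the formulas follow their standard definitions (anomalies taken relative to the observed mean), and the data are placeholders, not the study's inputs.

    ```python
    # Standard skill metrics: mean bias, MAE, anomaly correlation
    # coefficient (ACC), and Willmott's index of agreement (IA).
    import numpy as np

    def skill_scores(model, obs, clim=None):
        """model, obs: 1-D arrays of monthly precipitation (mm);
        clim: climatology for anomalies (defaults to the observed mean)."""
        clim = obs.mean() if clim is None else clim
        mb = (model - obs).mean()               # mean bias
        mae = np.abs(model - obs).mean()        # mean absolute error
        ma, oa = model - clim, obs - clim       # anomalies
        acc = (ma * oa).sum() / np.sqrt((ma**2).sum() * (oa**2).sum())
        ia = 1 - ((model - obs)**2).sum() / (
            (np.abs(model - obs.mean()) + np.abs(obs - obs.mean()))**2).sum()
        return {"mean_bias": mb, "MAE": mae, "ACC": acc, "IA": ia}

    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 40.0, size=120)            # 10 years, monthly (mm)
    model = obs + rng.normal(5.0, 20.0, size=120)   # biased, noisy "hindcast"
    print(skill_scores(model, obs))
    ```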

    Monthly Rainfall Prediction at Catchment Level with the Facebook Prophet Model Using Observed and CMIP5 Decadal Data

    Early prediction of rainfall is important for the planning of agriculture, water infrastructure, and other socio-economic developments. The near-term prediction (e.g., 10 years) of hydrologic data is a recent development in GCM (General Circulation Model) simulations, e.g., the CMIP5 (Coupled Model Intercomparison Project Phase 5) decadal experiments. The prediction of monthly rainfall on a decadal time scale is an important step for catchment management. Previous studies have considered stochastic models using observed time series data only for rainfall prediction, but no studies have used GCM decadal data together with observed data at the catchment level. This study used the Facebook Prophet (FBP) model and six machine learning (ML) regression algorithms for the prediction of monthly rainfall on a decadal time scale for the Brisbane River catchment in Queensland, Australia. Monthly hindcast decadal precipitation data of eight GCMs (EC-EARTH, MIROC4h, MRI-CGCM3, MPI-ESM-LR, MPI-ESM-MR, MIROC5, CanCM4, and CMCC-CM) were downloaded from the CMIP5 data portal, and the observed data were collected from the Australian Bureau of Meteorology. At first, the FBP model was used for predictions based on: (i) the observed data only; and (ii) a combination of observed and CMIP5 decadal data. In the next step, predictions were performed through ML regressions where CMIP5 decadal data were used as features and corresponding observed data were used as target variables. The prediction skills were assessed through several skill tests, including Pearson Correlation Coefficient (PCC), Anomaly Correlation Coefficient (ACC), Index of Agreement (IA), and Mean Absolute Error (MAE). Upon comparing the skills, this study found that predictions based on a combination of observed and CMIP5 decadal data through the FBP model provided better skills than the predictions based on the observed data only. The optimal performance of the FBP model, especially for the dry periods, was mainly due to its multiplicative seasonality function.
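
    A minimal sketch of the two Prophet setups described above follows: observed data only, and observed data plus a GCM hindcast series as an extra regressor. The synthetic data and all variable names are illustrative; the authors' exact configuration may differ, though the multiplicative seasonality option is as noted in the abstract.

    ```python
    # Prophet with and without a CMIP5 hindcast series as an extra regressor.
    import numpy as np
    import pandas as pd
    from prophet import Prophet

    # synthetic stand-ins: 20 years of monthly observed rainfall plus a
    # correlated "GCM hindcast" (real inputs would be BoM and CMIP5 data)
    rng = np.random.default_rng(1)
    ds = pd.date_range("1996-01-01", periods=240, freq="MS")
    season = 60 + 40 * np.sin(2 * np.pi * ds.month / 12)
    y = season * rng.lognormal(0, 0.3, 240)
    gcm = y * rng.lognormal(0, 0.2, 240)    # noisy proxy for the hindcast
    df = pd.DataFrame({"ds": ds, "y": y, "gcm": gcm})

    # (i) observed data only, with multiplicative seasonality
    m1 = Prophet(seasonality_mode="multiplicative")
    m1.fit(df[["ds", "y"]])

    # (ii) observed data plus the CMIP5 hindcast as an additional regressor
    m2 = Prophet(seasonality_mode="multiplicative")
    m2.add_regressor("gcm")
    m2.fit(df)

    # in-sample check; a true out-of-sample forecast would also need the
    # GCM series for the forecast decade to fill the "gcm" column
    print(m2.predict(df)[["ds", "yhat"]].tail())
    ```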

    CMIP5 Decadal Precipitation at Catchment Level and Its Implication to Future Prediction

    This study assesses the monthly precipitation of the CMIP5 decadal experiment over the Brisbane River catchment at a spatial resolution of 0.05 degrees, and then predicts monthly precipitation on a decadal timescale through a bidirectional LSTM and machine learning algorithms using GCM and observed data. To use GCM data in this future prediction, investigations were carried out into a suitable spatial interpolation method, a better simulation period, model drifts, and drift-correction alternatives, based on different skill tests.
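
    For orientation, here is a minimal sketch of a bidirectional LSTM for next-month precipitation in Keras; the window length, layer sizes, and synthetic data are assumptions for illustration, not the study's architecture.

    ```python
    # Bidirectional LSTM on sliding windows of a monthly rainfall series.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    WINDOW = 12  # use the previous 12 months to predict the next month

    def make_windows(series, window=WINDOW):
        X = np.stack([series[i:i + window]
                      for i in range(len(series) - window)])
        y = series[window:]
        return X[..., None], y  # add a feature axis for the LSTM

    rng = np.random.default_rng(2)
    rain = rng.gamma(2.0, 40.0, size=360)   # synthetic 30-year monthly series
    X, y = make_windows(rain)

    model = keras.Sequential([
        layers.Input(shape=(WINDOW, 1)),
        layers.Bidirectional(layers.LSTM(32)),  # reads the window both ways
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mae")
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    print(model.predict(X[-1:], verbose=0))     # next-month prediction
    ```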

    Comparing Spatial Interpolation Methods for CMIP5 Monthly Precipitation at Catchment Scale

    Use of Regional Climate Models (RCMs) is prevalent for downscaling large-scale climate information from General Circulation Models (GCMs) to the local scale, but it is computationally intensive and requires running a numerical weather prediction model. For more straightforward computation, spatial interpolation methods are commonly used to re-grid GCM data to local scales. Many interpolation methods are available, but they are often chosen arbitrarily, especially for GCM data. This study compared eight interpolation methods (linear, bilinear, nearest neighbour, distance-weighted average, inverse-distance-weighted average, first-order conservative, second-order conservative, and bicubic interpolation) for re-gridding CMIP5 decadal experimental data to the catchment scale. CMIP5 decadal precipitation data from three GCMs were collected, subset for Australia, and then re-gridded to 0.05-degree spatial resolution to match the observed gridded data. The re-gridded data were subset for the Brisbane catchment in Queensland, Australia, and a number of skill tests (root mean squared error, mean absolute error, correlation coefficient, Pearson correlation, Kendall's tau correlation, and index of agreement) were conducted at a selected observed point to check the performance of the different interpolation methods. Additionally, temporal skills were computed over the entire catchment and compared. Based on the skill tests over the study area, the second-order conservative (SOC) method was found to be an appropriate choice for interpolating the gridded dataset.
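
    A minimal sketch of this re-gridding step, driving the CDO tool from Python, is shown below. The CDO operators used (remapbil, remapbic, remapnn, remapdis, remapcon, remapcon2) are real and cover six of the eight methods compared above; the file names and the 0.05-degree grid extent are illustrative assumptions, and CDO must be installed for this to run.

    ```python
    # Re-grid a GCM precipitation file with several CDO remapping operators.
    import subprocess

    # hypothetical 0.05-degree lon-lat target grid over Australia
    grid = """gridtype = lonlat
    xsize = 800
    ysize = 700
    xfirst = 112.0
    xinc = 0.05
    yfirst = -44.0
    yinc = 0.05
    """
    with open("target_grid.txt", "w") as f:
        f.write(grid)

    methods = ["remapbil", "remapbic", "remapnn",
               "remapdis", "remapcon", "remapcon2"]
    for op in methods:
        # e.g. "cdo remapcon2,target_grid.txt gcm_precip.nc precip_remapcon2.nc"
        subprocess.run(
            ["cdo", f"{op},target_grid.txt",
             "gcm_precip.nc", f"precip_{op}.nc"],
            check=True,  # assumes CDO is installed and gcm_precip.nc exists
        )
    ```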

    Obstetrics and gynecology outpatient scenario of an Indian homeopathic hospital: A prospective, research-targeted study

    The authors aimed to document prescriptions and clinical outcomes in routine homeopathic practice to shortlist promising areas for targeted research and efficacy trials of homeopathy in obstetrics and gynecology (O&G). Three homeopathic physicians participated in methodical data collection over a 3-month period in the O&G outpatient setting of The Calcutta Homeopathic Medical College and Hospital, West Bengal, India. A specifically designed Excel spreadsheet was used to record data on consecutive appointments, including date, patient identity, socioeconomic status, place of abode, religion, medical condition/complaint, whether chronic/acute, new/follow-up case, patient-assessed outcome (7-point Likert scale: −3 to +3), prescribed homeopathic medication, and whether other medication/s was being taken for the condition. These spreadsheets were submitted monthly for data synthesis and analysis. Data on 878 appointments (429 patients) were collected; outcomes were positive in 61% of cases, negative in 20.8%, and unchanged in 18.2%. Chronic conditions (93.2%) were chiefly encountered. A total of 434 medical conditions of 52 varieties were reported overall. The most frequently treated conditions were leucorrhea (20.5%), irregular menses (13.3%), dysmenorrhea (10%), menorrhagia (7.5%), and hypomenorrhea (6.3%). Strongly positive outcomes (+3/+2) were mostly recorded in oligomenorrhea (41.7%), leucorrhea (34.1%), polycystic ovary (33.3%), dysmenorrhea (28%), and irregular menses (22.2%). Individualized prescriptions predominated (95.6%). A total of 122 different medicines were prescribed in decimal (2.9%), centesimal (87.9%), and 50 millesimal (4.9%) potencies. Mother tinctures and placebo were prescribed in 3.4% and 30.4% of instances, respectively. Several instances of medicine-condition pairings were detected. This systematic recording cataloged the frequency and success rate of treating O&G conditions using homeopathy.